Multimodal Interaction with a Map-Based Simulation System*
Author

Abstract
INTRODUCTION

This report describes InterLACE (Interface to LACE), a natural language and graphical interface to the LACE (Land Air Combat in ERIC) military simulation system from Rome Air Development Center, Griffiss Air Force Base, NY [Anken 89]. Our research group at NCARAI has a special interest in related issues such as the mental modeling of spatial information, multimodal human-computer interaction, and the linguistics of 2-D and 3-D spatial relations in computerized map-based decision aids, command and control systems, and virtual environments. In particular, we wish to investigate how the naturalness of human-computer dialog in multimodal interfaces might be improved by applying principles of human-human discourse understanding. LACE, a map-based military system containing a large real-world cartographic database, seemed an excellent testbed for exploring these issues.
Similar resources
Multimodal medical image fusion based on Yager’s intuitionistic fuzzy sets
The objective of image fusion for medical images is to combine multiple images obtained from various sources into a single image suitable for better diagnosis. Most state-of-the-art image fusion techniques are based on nonfuzzy sets, and the fused image so obtained lacks complementary information. Intuitionistic fuzzy sets (IFS) are determined to be more suitable for civilian, and medi...
Unification-based Multimodal Integration
Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for mapbased tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing...
A Unified Framework for Constructing Multimodal Experiments and Applications
In 1994, inspired by a Wizard of Oz (WOZ) simulation experiment, we developed a working prototype of a system that enables users to interact with a map display through synergistic combinations of pen and voice. To address many of the issues raised by multimodal fusion, our implementation employed a distributed multi-agent framework to coordinate parallel competition and cooperation among proces...
Collaborative Integration of Speech and 3D Gesture for Map-Based Applications
QuickSet [6] is a multimodal system that gives users the capability to create and control map-based collaborative interactive simulations by supporting the simultaneous input from speech and pen gestures. In this paper, we report on the augmentation of the graphical pen input enabling the drawings to be formed by 3D hand movements. While pen and mouse can still be used for ink generation drawin...
Journal:
Volume, Issue:
Pages: -
Publication date: 1996